LLE (Low Light Enhancement)

AI-based Low-Light Enhancement inference API for Windows x64.



Overview

LLE is a low-light image enhancement library that provides an inference API for an AI model trained in Python.

NuGet Packages (Native vs Managed)

Both packages target Windows x64. GPU inference requires a compatible NVIDIA GPU environment (see below).

Sample Result (Before / After)

A quick visual comparison using the bundled sample images.

Low-light input: low_61 | Enhanced output: enhanced_low_61

If images don’t render on NuGet.org, switch these URLs to raw links:

Model Support (Now / Next)

LLE is not a one-off release. Model quality and available variants may improve through updates.
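For context, the Zero-DCE / Zero-DCE++ family used in the usage examples below enhances an image by applying learned, pixel-wise quadratic curves iteratively. A brief sketch of the published formulation (per the original Zero-DCE paper; notation follows that paper, not the LLE API):

```latex
% Zero-DCE light-enhancement curve: each iteration n applies a
% pixel-wise quadratic curve with a learned parameter map A_n.
LE_n(\mathbf{x}) = LE_{n-1}(\mathbf{x})
  + \mathcal{A}_n(\mathbf{x})\, LE_{n-1}(\mathbf{x})\bigl(1 - LE_{n-1}(\mathbf{x})\bigr),
\qquad LE_0(\mathbf{x}) = I(\mathbf{x})
```

The parameter maps \(\mathcal{A}_n\) are predicted by a small CNN, which is what makes the method lightweight enough for CPU inference.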

Training Scripts

Dataset


Platform


Runtime (CPU / CUDA)

CPU

CUDA (GPU)

If CUDA inference fails to load (e.g., DLL not found / entry point not found), the most common causes are:

Note: NVIDIA downloads may require an NVIDIA Developer account login.

CPU + CUDA Mixed Usage (Important)


Development Environment


Runtime Dependency (Required)

This library requires a separate redistribution package to run (native runtime DLLs, etc.). Download and install the redistribution package before using LLE.


NuGet Packages

LLE is not a one-off release; the NuGet packages can be updated over time (bug fixes, performance improvements, new runtime variants, model upgrades).

Current / planned package list:

The list may expand (e.g., different CUDA versions) and existing packages may receive updates.


Installation

C++ (native)

Package Manager

Install-Package LLE.Native.Cu118

.NET CLI

dotnet add package LLE.Native.Cu118

.NET / C# (managed wrapper)

Package Manager

Install-Package LLE.Managed.Cuda118

.NET CLI

dotnet add package LLE.Managed.Cuda118

Usage in C++

#include <lle/memoryPool.h>
#include <lle/image.h>
#include <lle/lle.h>
#include <iostream>

int main()
{
    try {
        // Create LLE instance
        auto lle = lleapi::v1::lle::create();
        // Load the Zero-DCE++ model on CPU
        // (also supports loading an ONNX model from a file path)
        lle->setup(lleapi::v1::dlType::zeroDCE, lleapi::v1::device::cpu);
        // Load a color image
        auto input = lleapi::v1::image::imread(
            "C:/github/dataset/lol_dataset/our485/low/low_15.png",
            lleapi::v1::colorType::color
        );
        // Predict
        auto output = lle->predict(input);
        // Save the result image
        lleapi::v1::image::imwrite(
            "C:/github/LLE/LLE/x64/Debug/result1.jpg",
            output
        );
        // Clean up the internal instance
        lle->shutdown();
    }
    catch (const std::exception& ex) {
        std::cout << ex.what() << std::endl;
    }
}

Usage in C#

using System;

namespace ManagedTest
{
    internal class Program
    {
        static void Main(string[] args)
        {
            try
            {
                // Create LLE instance
                var lle = LLEAPI.V1.LLE.Create();
                // Load the Zero-DCE model on CPU
                // (also supports loading an ONNX model from a file path)
                lle.Setup(LLEAPI.V1.DlType.ZeroDCE, LLEAPI.V1.Device.Cpu);
                // Load a color image
                var input = LLEAPI.V1.Image.Imread(
                    "C:/github/dataset/lol_dataset/our485/low/low_15.png",
                    LLEAPI.V1.ColorType.Color);
                // Predict
                var output = lle.Predict(input);
                // Save the result image to disk
                LLEAPI.V1.Image.Imwrite(
                    "C:/github/LLE/LLE/x64/Debug/result1.jpg",
                    output);
                // Clean up the internal instance
                lle.Shutdown();
            }
            catch (Exception ex)
            {
                System.Diagnostics.Debug.WriteLine(ex.ToString());
            }
        }
    }
}


Roadmap


Research References / Acknowledgements

This project uses ideas and/or model architectures from academic research. If you use LLE in research, demos, or publications, please consider citing the original papers.

We sincerely thank the authors and contributors of these works for advancing low-light enhancement research:

Note: Please also comply with the licenses/terms of any upstream code, weights, and third-party libraries you use or redistribute.


License

This project is licensed under the MIT License (for the LLE source code).

Third-party notices (important)

This distribution may include third-party components and/or binaries.
Those components are NOT covered by the MIT License and remain subject to their respective licenses/terms.

Included third-party license texts are provided under the licenses/ folder:

By using this package, you agree to comply with all applicable third-party license terms in addition to the MIT License.


MIT License

Copyright (c) 2025–present gellston

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.